
    User experience for multi-device ecosystems: challenges and opportunities

    Smart devices have pervaded every aspect of people's daily lives. Although single-device UX products are relatively successful, the experience of cross-device interaction is still far from satisfactory and can be a source of frustration. Inconsistent UI styles, unclear coordination, varying fidelity, pairwise-only interactions, limited understanding of user intent, restricted data sharing and security, and other problems typically degrade the experience in a multi-device ecosystem. Redesigning UX tailored to multi-device ecosystems turns out to be challenging but at the same time affords many new opportunities. This workshop brings together researchers, practitioners, and developers with different backgrounds, including fields such as computational design, affective computing, and multimodal interaction, to exchange views, share ideas, and explore future directions for UX in distributed scenarios, especially in heterogeneous cross-device ecosystems. The topics cover, but are not limited to, distributed UX design, accessibility, cross-device HCI, human factors in distributed scenarios, user-centric interfaces, and multi-device ecosystems.

    Multimodal System Processing in Mobile Environments

    One major goal of multimodal system design is to support more robust performance than can be achieved with a unimodal recognition technology, such as a spoken language system. In recent years, the multimodal literature on speech and pen input and on speech and lip movements has begun developing relevant performance criteria and demonstrating a reliability advantage for multimodal architectures. In the present studies, over 2,600 utterances processed by a multimodal pen/voice system were collected during both mobile and stationary use. A new data collection infrastructure was developed, including instrumentation worn by the user while roaming, a researcher field station, and a multimodal data logger and analysis tool tailored for mobile research. Although stand-alone speech recognition failed more often during mobile system use, the results confirmed that a more stable multimodal architecture decreased this error rate by 19–35%. Furthermore, these findings were replicated across dif..
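
    The abstract does not describe the fusion mechanism itself; a common approach in pen/voice systems, and one plausible reading of "a more stable multimodal architecture", is late semantic fusion in which pen hypotheses help disambiguate speech hypotheses (mutual disambiguation). The sketch below is a minimal illustration with invented names, toy confidence scores, and an illustrative compatibility test; it is not code or an architecture taken from the paper.

```python
# Hypothetical sketch of late semantic fusion between a speech recognizer and a
# pen recognizer. The paper's actual architecture is not specified in this
# abstract; the function names, scoring, and compatibility test are assumptions.
from itertools import product

def fuse(speech_nbest, pen_nbest, compatible):
    """Return the highest-scoring pair of semantically compatible hypotheses.

    speech_nbest / pen_nbest: lists of (interpretation, confidence) pairs.
    compatible: predicate deciding whether a speech and a pen interpretation
    can be unified into a single command.
    """
    best, best_score = None, float("-inf")
    for (s, s_conf), (p, p_conf) in product(speech_nbest, pen_nbest):
        if not compatible(s, p):
            continue  # mutual disambiguation: incompatible pairings are discarded
        score = s_conf * p_conf  # simple joint confidence
        if score > best_score:
            best, best_score = (s, p), score
    return best

# A lower-ranked speech hypothesis can win when it is the only one the pen input supports.
speech = [("pan north", 0.6), ("plan route", 0.3)]
pen = [("route: A->B", 0.8)]
print(fuse(speech, pen, lambda s, p: ("route" in s) == ("route" in p)))
# -> ('plan route', 'route: A->B')
```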

    Multimodal interfaces for dynamic interactive maps

    Dynamic interactive maps with transparent but powerful human interface capabilities are beginning to emerge for a variety of geographical information systems, including ones situated on portables for travelers, students, business and service people, and others working in field settings. In the present research, interfaces supporting spoken, pen-based, and multimodal input were analyzed for their potential effectiveness in interacting with this new generation of map systems. Input modality (speech, writing, multimodal) and map display format (highly versus minimally structured) were varied in a within-subject factorial design as people completed realistic tasks with a simulated map system. The results identified a constellation of performance difficulties associated with speech-only map interactions, including elevated performance errors, spontaneous disfluencies, and lengthier task completion time; these problems declined substantially when people could interact multimodally with the map. These performance advantages also mirrored a strong user preference to interact multimodally. The error-proneness and unacceptability of speech-only input to maps was attributed in large part to people's difficulty generating spoken descriptions of spatial location. Analyses also indicated that map display format can be used to minimize performance errors and disfluencies, and map interfaces that guide users' speech toward brevity can nearly eliminate disfluencies. Implications of this research are discussed for the design of high-performance multimodal interfaces for future map systems.
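
    As a concrete, purely illustrative picture of the within-subject factorial analysis described above, the sketch below tabulates mean errors and disfluency rate per modality-by-format cell. The column names, the per-100-words rate, and the toy data are assumptions for illustration, not the authors' materials.

```python
# Minimal sketch of summarising a modality x map-format within-subject design;
# all values and column names are invented for illustration.
import pandas as pd

trials = pd.DataFrame(
    [
        (1, "speech",     "minimal",    3, 2, 40),
        (1, "multimodal", "structured", 1, 0, 25),
        (2, "speech",     "structured", 2, 1, 30),
        (2, "multimodal", "minimal",    0, 0, 28),
    ],
    columns=["subject", "modality", "format", "errors", "disfluencies", "words"],
)

# Disfluencies are typically reported as a rate per 100 words rather than raw counts.
trials["disfl_per_100_words"] = 100 * trials["disfluencies"] / trials["words"]

summary = trials.groupby(["modality", "format"])[["errors", "disfl_per_100_words"]].mean()
print(summary)
```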

    Predicting Spoken Disfluencies During Human-Computer Interaction

    This research characterizes the spontaneous spoken disfluencies typical of human-computer interaction, and presents a predictive model accounting for their occurrence. Data were collected during three empirical studies in which people spoke or wrote to a highly interactive simulated system as they completed service transactions. The studies involved within-subject factorial designs in which the input modality and presentation format were varied. Spoken disfluency rates during human-computer interaction were documented to be substantially lower than rates typically observed during comparable human-human speech. Two separate factors, both associated with increased planning demands, were statistically related to higher disfluency rates: (1) length of utterance, and (2) lack of structure in the presentation format. Regression techniques demonstrated that a linear model based simply on utterance length accounted for over 77% of the variability in spoken disfluencies. Therefore, design methods ca..
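
    The "over 77% of the variability" claim corresponds to the R² of a simple linear regression of disfluencies on utterance length. A minimal sketch of that computation follows; the data points are invented for illustration, so the resulting R² is not the paper's value.

```python
# Minimal sketch: ordinary least-squares fit of disfluency count on utterance
# length, with R^2 as the proportion of variance explained. Toy data only.
import numpy as np

utterance_len = np.array([4, 7, 9, 12, 15, 18, 22, 30])  # words per utterance
disfluencies  = np.array([0, 0, 1, 1, 2, 2, 3, 4])       # disfluencies observed

slope, intercept = np.polyfit(utterance_len, disfluencies, 1)
predicted = slope * utterance_len + intercept

ss_res = np.sum((disfluencies - predicted) ** 2)
ss_tot = np.sum((disfluencies - disfluencies.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope={slope:.3f}, intercept={intercept:.3f}, R^2={r_squared:.2f}")
```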